The Perceptron Algorithm Versus Winnow: Linear Versus Logarithmic Mistake Bounds when Few Input Variables are Relevant (Technical Note)
Authors
Jyrki Kivinen, Manfred K. Warmuth, Peter Auer
Abstract
We give an adversary strategy that forces the Perceptron algorithm to make Ω(kN) mistakes in learning monotone disjunctions over N variables with at most k literals. In contrast, Littlestone's algorithm Winnow makes at most O(k log N) mistakes for the same problem. Both algorithms use thresholded linear functions as their hypotheses. However, Winnow performs multiplicative updates to its weight vector instead of the additive updates of the Perceptron algorithm. In general, we call an algorithm additive if its weight vector is always a sum of a fixed initial weight vector and some linear combination of already seen instances. Thus, the Perceptron algorithm is an example of an additive algorithm. We show that an adversary can force any additive algorithm to make (N + k - 1)/2 mistakes in learning a monotone disjunction of at most k literals. Simple experiments show that for k ≪ N, Winnow clearly outperforms the Perceptron algorithm on nonadversarial random data as well. © 1997 Elsevier Science B.V.
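Since the abstract turns on the contrast between additive and multiplicative updates, here is a minimal sketch of both update rules on a random monotone disjunction, assuming standard textbook formulations of the Perceptron and of Winnow with promotion/demotion factor alpha; the values of N, k, the number of trials, and alpha are illustrative choices, not the paper's experimental setup.

import random

# Target concept: a monotone disjunction of the first k of N variables.
N, k, TRIALS = 100, 3, 2000
relevant = set(range(k))

def target(x):
    return int(any(x[i] for i in relevant))

# Perceptron: additive updates, zero initial weights and bias.
w_p, b_p = [0.0] * N, 0.0
# Winnow: multiplicative updates, all weights start at 1, fixed threshold N.
w_w, theta, alpha = [1.0] * N, float(N), 2.0

mistakes_p = mistakes_w = 0
rng = random.Random(0)
for _ in range(TRIALS):
    x = [rng.randint(0, 1) for _ in range(N)]
    y = target(x)

    # Perceptron: predict with a thresholded linear function, then
    # add or subtract the instance itself on a mistake.
    y_hat = int(sum(wi * xi for wi, xi in zip(w_p, x)) + b_p > 0)
    if y_hat != y:
        mistakes_p += 1
        s = 1 if y == 1 else -1
        w_p = [wi + s * xi for wi, xi in zip(w_p, x)]
        b_p += s

    # Winnow: same hypothesis class, but scale the weights of the
    # active variables multiplicatively on a mistake.
    y_hat = int(sum(wi * xi for wi, xi in zip(w_w, x)) >= theta)
    if y_hat != y:
        mistakes_w += 1
        factor = alpha if y == 1 else 1.0 / alpha
        w_w = [wi * factor if xi else wi for wi, xi in zip(w_w, x)]

print("Perceptron mistakes:", mistakes_p)
print("Winnow mistakes:", mistakes_w)

On runs like this with k much smaller than N, the multiplicative learner typically makes far fewer mistakes, in line with the O(k log N) versus Ω(kN) separation the abstract describes.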
Similar papers
Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms
We study online learning in Boolean domains using kernels which capture feature expansions equivalent to using conjunctions over basic features. We demonstrate a tradeoff between the computational efficiency with which these kernels can be computed and the generalization ability of the resulting classifier. We first describe several kernel functions which capture either limited forms of conjunc...
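One standard kernel of this kind (stated here as background, not quoted from the cited paper) expands x into one Boolean feature per monotone conjunction over the base variables; two such expansions then have inner product K(x, y) = 2^c, where c counts the positions at which x and y are both 1. A minimal kernel-Perceptron sketch using this kernel, with illustrative function names:

def monotone_conjunction_kernel(x, y):
    # Inner product of the implicit expansions into all monotone
    # conjunctions: every conjunction over the commonly active
    # variables (including the empty one) contributes 1.
    common = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 1)
    return 2 ** common

def predict(support, x):
    # Kernel Perceptron: the hypothesis is a signed sum of kernel
    # evaluations against previously misclassified instances.
    score = sum((1 if yj == 1 else -1) * monotone_conjunction_kernel(sj, x)
                for sj, yj in support)
    return int(score > 0)

Computing K(x, y) takes linear time even though the implicit feature space has exponentially many coordinates, which is the efficiency side of the tradeoff the snippet mentions.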
Perceptron Mistake Bounds
We present a brief survey of existing mistake bounds and introduce novel bounds for the Perceptron or the kernel Perceptron algorithm. Our novel bounds generalize beyond standard margin-loss type bounds, allow for any convex and Lipschitz loss function, and admit a very simple proof.
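For context, the classical margin-based bound that such results extend is Novikoff's theorem (a standard fact, not one of the survey's novel bounds): if every instance x_t has norm at most R and some unit-length weight vector u separates the sequence with margin γ, i.e. y_t⟨u, x_t⟩ ≥ γ for all t, then the Perceptron makes at most (R/γ)² mistakes, regardless of the number of trials.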
A Multi-class Linear Learning Algorithm Related to Winnow with Proof
In this paper, we present Committee, a new multi-class learning algorithm related to the Winnow family of algorithms. Committee is an algorithm for combining the predictions of a set of sub-experts in the online mistake-bounded model of learning. A sub-expert is a special type of attribute that predicts with a distribution over a finite number of classes. Committee learns a linear function of s...
A Multi-class Linear Learning Algorithm Related to Winnow
In this paper, we present Committee, a new multi-class learning algorithm related to the Winnow family of algorithms. Committee is an algorithm for combining the predictions of a set of sub-experts in the online mistake-bounded model of learning. A sub-expert is a special type of attribute that predicts with a distribution over a finite number of classes. Committee learns a linear function of s...
Using Linear-threshold Algorithms to Combine Multi-class Sub-experts
We present a new type of multi-class learning algorithm called a linear-max algorithm. Linear-max algorithms learn with a special type of attribute called a sub-expert. A sub-expert is a vector attribute that has a value for each output class. The goal of the multi-class algorithm is to learn a linear function combining the sub-experts and to use this linear function to make correct class predic...
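To make the sub-expert setup concrete, here is a minimal sketch; the multiplicative update below is an assumed Winnow-style rule, not the paper's exact algorithm, and all names are illustrative.

def predict_class(weights, subexpert_dists, num_classes):
    # Each sub-expert supplies a distribution over the classes; the
    # learner combines them linearly and predicts the top-scoring class.
    votes = [0.0] * num_classes
    for w, dist in zip(weights, subexpert_dists):
        for c in range(num_classes):
            votes[c] += w * dist[c]
    return max(range(num_classes), key=lambda c: votes[c])

def multiplicative_update(weights, subexpert_dists, predicted, correct, alpha=1.5):
    # On a mistake, promote sub-experts that favored the correct class
    # over the predicted one and demote those that did the opposite
    # (an assumed rule in the spirit of the Winnow family).
    if predicted == correct:
        return weights
    return [w * alpha if d[correct] > d[predicted]
            else (w / alpha if d[correct] < d[predicted] else w)
            for w, d in zip(weights, subexpert_dists)]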
Journal:
Artif. Intell.
Volume 97, Issue -
Pages -
Publication date: 1997